Assignment 4: Street Networks & Web Scraping¶

Part 1: Visualizing crash data in Philadelphia

In this section, you will use osmnx to analyze the crash incidence in Center City.

Part 2: Scraping Craigslist

In this section, you will use Selenium and BeautifulSoup to scrape data for hundreds of apartments from Philadelphia's Craigslist portal.

Part 1: Visualizing crash data in Philadelphia¶

In [1]:
import pandas as pd
import geopandas as gpd
import numpy as np
import hvplot.pandas
import holoviews as hv
import panel as pn
from matplotlib import pyplot as plt
import matplotlib.pyplot as plt
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from bs4 import BeautifulSoup
import pandas as pd
import requests
from time import sleep
import pandas as pd
from bs4 import BeautifulSoup
import altair as alt
import pandas as pd
In [2]:
planningdistricts =  gpd.read_file('Planning_Districts.geojson')
central = planningdistricts.query('DIST_NAME == "Central"')
CC_outline = central.squeeze().geometry
In [3]:
central_geometry = central['geometry'].iloc[0]

1.2 Get the street network graph¶

Use OSMnx to create a network graph (of type 'drive') from your polygon boundary in 1.1.

In [4]:
import osmnx as ox
CC = ox.graph_from_polygon(central_geometry, network_type='drive')
ox.plot_graph(ox.project_graph(CC))
No description has been provided for this image
Out[4]:
(<Figure size 800x800 with 1 Axes>, <Axes: >)

1.3 Convert your network graph edges to a GeoDataFrame¶

Use OSMnx to create a GeoDataFrame of the network edges in the graph object from part 1.2. The GeoDataFrame should contain the edges but not the nodes from the network.

In [5]:
cc_edges = ox.graph_to_gdfs(CC, edges=True, nodes=False)
cc_edges.head()
Out[5]:
osmid oneway name highway reversed length geometry maxspeed lanes bridge ref tunnel width service access junction
u v key
109727439 109911666 0 132508434 True Bainbridge Street residential False 44.137 LINESTRING (-75.17104 39.94345, -75.17053 39.9... NaN NaN NaN NaN NaN NaN NaN NaN NaN
109727448 109727439 0 12109011 True South Colorado Street residential False 109.484 LINESTRING (-75.17125 39.94248, -75.17120 39.9... NaN NaN NaN NaN NaN NaN NaN NaN NaN
110034229 0 12159387 True Fitzwater Street residential False 91.353 LINESTRING (-75.17125 39.94248, -75.17137 39.9... NaN NaN NaN NaN NaN NaN NaN NaN NaN
109727507 110024052 0 193364514 True Carpenter Street residential False 53.208 LINESTRING (-75.17196 39.93973, -75.17134 39.9... NaN NaN NaN NaN NaN NaN NaN NaN NaN
109728761 110274344 0 672312336 True Brown Street residential False 58.270 LINESTRING (-75.17317 39.96951, -75.17250 39.9... 25 mph NaN NaN NaN NaN NaN NaN NaN NaN
In [6]:
ax = cc_edges.to_crs(epsg=32618).plot(color="gray")
boundary = gpd.GeoSeries([CC_outline], crs="EPSG:4326")
boundary.to_crs(epsg=32618).plot(
    ax=ax, facecolor="none", edgecolor="red", linewidth=3, zorder=2
)
ax.set_axis_off()
No description has been provided for this image

1.4 Load PennDOT crash data¶

Data for crashes (of all types) for 2020, 2021, and 2022 in Philadelphia County is available at the following path:

./data/CRASH_PHILADELPHIA_XXXX.csv

You should see three separate files in the data/ folder. Use pandas to read each of the CSV files, and combine them into a single dataframe using pd.concat().

The data was downloaded for Philadelphia County from here.

In [7]:
crash2020 = pd.read_csv('CRASH_PHILADELPHIA_2020.csv')
crash2021 = pd.read_csv('CRASH_PHILADELPHIA_2021.csv')
crash2022 = pd.read_csv('CRASH_PHILADELPHIA_2022.csv')
crash2020.head()
Out[7]:
CRN ARRIVAL_TM AUTOMOBILE_COUNT BELTED_DEATH_COUNT BELTED_SUSP_SERIOUS_INJ_COUNT BICYCLE_COUNT BICYCLE_DEATH_COUNT BICYCLE_SUSP_SERIOUS_INJ_COUNT BUS_COUNT CHLDPAS_DEATH_COUNT ... WORK_ZONE_TYPE WORKERS_PRES WZ_CLOSE_DETOUR WZ_FLAGGER WZ_LAW_OFFCR_IND WZ_LN_CLOSURE WZ_MOVING WZ_OTHER WZ_SHLDER_MDN WZ_WORKERS_INJ_KILLED
0 2020036588 1349.0 1 0 0 0 0 0 0 0 ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
1 2020036617 1842.0 1 0 0 0 0 0 0 0 ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2 2020035717 2000.0 1 0 0 0 0 0 0 0 ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
3 2020034378 1139.0 2 0 0 0 0 0 0 0 ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
4 2020025511 345.0 1 0 0 0 0 0 0 0 ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN

5 rows × 100 columns

In [8]:
crashes = pd.concat([crash2020, crash2021, crash2022], ignore_index=True)
crashes.head()
Out[8]:
CRN ARRIVAL_TM AUTOMOBILE_COUNT BELTED_DEATH_COUNT BELTED_SUSP_SERIOUS_INJ_COUNT BICYCLE_COUNT BICYCLE_DEATH_COUNT BICYCLE_SUSP_SERIOUS_INJ_COUNT BUS_COUNT CHLDPAS_DEATH_COUNT ... WORK_ZONE_TYPE WORKERS_PRES WZ_CLOSE_DETOUR WZ_FLAGGER WZ_LAW_OFFCR_IND WZ_LN_CLOSURE WZ_MOVING WZ_OTHER WZ_SHLDER_MDN WZ_WORKERS_INJ_KILLED
0 2020036588 1349.0 1 0 0 0 0 0 0 0 ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
1 2020036617 1842.0 1 0 0 0 0 0 0 0 ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2 2020035717 2000.0 1 0 0 0 0 0 0 0 ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
3 2020034378 1139.0 2 0 0 0 0 0 0 0 ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
4 2020025511 345.0 1 0 0 0 0 0 0 0 ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN

5 rows × 100 columns

In [9]:
geometry = gpd.points_from_xy(crashes['DEC_LONG'], crashes['DEC_LAT'])
crashesgdf = gpd.GeoDataFrame(crashes, geometry=geometry, crs='EPSG:32618')
crashesgdf.head()
Out[9]:
CRN ARRIVAL_TM AUTOMOBILE_COUNT BELTED_DEATH_COUNT BELTED_SUSP_SERIOUS_INJ_COUNT BICYCLE_COUNT BICYCLE_DEATH_COUNT BICYCLE_SUSP_SERIOUS_INJ_COUNT BUS_COUNT CHLDPAS_DEATH_COUNT ... WORKERS_PRES WZ_CLOSE_DETOUR WZ_FLAGGER WZ_LAW_OFFCR_IND WZ_LN_CLOSURE WZ_MOVING WZ_OTHER WZ_SHLDER_MDN WZ_WORKERS_INJ_KILLED geometry
0 2020036588 1349.0 1 0 0 0 0 0 0 0 ... NaN NaN NaN NaN NaN NaN NaN NaN NaN POINT (-75.179 39.960)
1 2020036617 1842.0 1 0 0 0 0 0 0 0 ... NaN NaN NaN NaN NaN NaN NaN NaN NaN POINT (-75.207 39.981)
2 2020035717 2000.0 1 0 0 0 0 0 0 0 ... NaN NaN NaN NaN NaN NaN NaN NaN NaN POINT (-75.169 39.927)
3 2020034378 1139.0 2 0 0 0 0 0 0 0 ... NaN NaN NaN NaN NaN NaN NaN NaN NaN POINT (-75.192 39.924)
4 2020025511 345.0 1 0 0 0 0 0 0 0 ... NaN NaN NaN NaN NaN NaN NaN NaN NaN POINT (-75.245 39.883)

5 rows × 101 columns

1.6 Trim the crash data to Center City¶

  1. Get the boundary of the edges data frame (from part 1.3). Accessing the .geometry.unary_union.convex_hull property will give you a nice outer boundary region.
  2. Trim the crashes using the within() function of the crash GeoDataFrame to find which crashes are within the boundary.

There should be about 3,750 crashes within the Central district.

In [10]:
CC_crashes = crashesgdf[crashesgdf.geometry.within(cc_edges.geometry.unary_union.convex_hull)]
CC_crashes.head()
C:\Users\cruse\mambaforge1\envs\musa-550-fall-2023\lib\site-packages\shapely\predicates.py:946: RuntimeWarning: invalid value encountered in within
  return lib.within(a, b, **kwargs)
Out[10]:
CRN ARRIVAL_TM AUTOMOBILE_COUNT BELTED_DEATH_COUNT BELTED_SUSP_SERIOUS_INJ_COUNT BICYCLE_COUNT BICYCLE_DEATH_COUNT BICYCLE_SUSP_SERIOUS_INJ_COUNT BUS_COUNT CHLDPAS_DEATH_COUNT ... WORKERS_PRES WZ_CLOSE_DETOUR WZ_FLAGGER WZ_LAW_OFFCR_IND WZ_LN_CLOSURE WZ_MOVING WZ_OTHER WZ_SHLDER_MDN WZ_WORKERS_INJ_KILLED geometry
0 2020036588 1349.0 1 0 0 0 0 0 0 0 ... NaN NaN NaN NaN NaN NaN NaN NaN NaN POINT (-75.179 39.960)
7 2020035021 1255.0 1 0 0 0 0 0 1 0 ... NaN NaN NaN NaN NaN NaN NaN NaN NaN POINT (-75.163 39.970)
11 2020021944 805.0 2 0 0 0 0 0 0 0 ... NaN NaN NaN NaN NaN NaN NaN NaN NaN POINT (-75.188 39.952)
12 2020024963 1024.0 1 0 0 0 0 0 0 0 ... NaN NaN NaN NaN NaN NaN NaN NaN NaN POINT (-75.148 39.956)
18 2020000481 1737.0 1 0 0 0 0 0 1 0 ... NaN NaN NaN NaN NaN NaN NaN NaN NaN POINT (-75.155 39.953)

5 rows × 101 columns

1.7 Re-project our data into an approriate CRS¶

We'll need to find the nearest edge (street) in our graph for each crash. To do this, osmnx will calculate the distance from each crash to the graph edges. For this calculation to be accurate, we need to convert from latitude/longitude

We'll convert the local state plane CRS for Philadelphia, EPSG=2272

Two steps:¶

  1. Project the graph object (G) using the ox.project_graph. Run ox.project_graph? to see the documentation for how to convert to a specific CRS.
  2. Project the crash data using the .to_crs() function.
In [11]:
cc_projected = ox.project_graph(CC, to_crs=2272)
ox.plot_graph(cc_projected, node_size=0)
No description has been provided for this image
Out[11]:
(<Figure size 800x800 with 1 Axes>, <Axes: >)

1.8 Find the nearest edge for each crash¶

See: ox.distance.nearest_edges(). It takes three arguments:

  • the network graph
  • the longitude of your crash data (the x attribute of the geometry column)
  • the latitude of your crash data (the y attribute of the geometry column)

You will get a numpy array with 3 columns that represent (u, v, key) where each u and v are the node IDs that the edge links together. We will ignore the key value for our analysis.

In [12]:
X = CC_crashes.geometry.x
Y = CC_crashes.geometry.y
nearest_crash = ox.distance.nearest_edges(cc_projected, X, Y)

1.9 Calculate the total number of crashes per street¶

  1. Make a DataFrame from your data from part 1.7 with three columns, u, v, and key (we will only use the u and v columns)
  2. Group by u and v and calculate the size
  3. Reset the index and name your size() column as crash_count

After this step you should have a DataFrame with three columns: u, v, and crash_count.

In [13]:
nearestcrash = pd.DataFrame(nearest_crash, columns=['u', 'v', 'key'])
nearestcrash = nearestcrash.groupby(['u', 'v']).size().reset_index(name='crash_count')
nearestcrash.head()
Out[13]:
u v crash_count
0 110414154 110526399 3751

1.10 Merge your edges GeoDataFrame and crash count DataFrame¶

You can use pandas to merge them on the u and v columns. This will associate the total crash count with each edge in the street network.

Tips:

  • Use a left merge where the first argument of the merge is the edges GeoDataFrame. This ensures no edges are removed during the merge.
  • Use the fillna(0) function to fill in missing crash count values with zero.
In [14]:
merged_crashes = pd.merge(left=cc_edges, right=nearestcrash, how='left', on=['u','v'])
merged_crashes['crash_count'] = merged_crashes['crash_count'].fillna(0)
merged_crashes.head()
Out[14]:
u v osmid oneway name highway reversed length geometry maxspeed lanes bridge ref tunnel width service access junction crash_count
0 109727439 109911666 132508434 True Bainbridge Street residential False 44.137 LINESTRING (-75.17104 39.94345, -75.17053 39.9... NaN NaN NaN NaN NaN NaN NaN NaN NaN 0.0
1 109727448 109727439 12109011 True South Colorado Street residential False 109.484 LINESTRING (-75.17125 39.94248, -75.17120 39.9... NaN NaN NaN NaN NaN NaN NaN NaN NaN 0.0
2 109727448 110034229 12159387 True Fitzwater Street residential False 91.353 LINESTRING (-75.17125 39.94248, -75.17137 39.9... NaN NaN NaN NaN NaN NaN NaN NaN NaN 0.0
3 109727507 110024052 193364514 True Carpenter Street residential False 53.208 LINESTRING (-75.17196 39.93973, -75.17134 39.9... NaN NaN NaN NaN NaN NaN NaN NaN NaN 0.0
4 109728761 110274344 672312336 True Brown Street residential False 58.270 LINESTRING (-75.17317 39.96951, -75.17250 39.9... 25 mph NaN NaN NaN NaN NaN NaN NaN NaN 0.0
In [15]:
merged_crashes['crash_count'].sum()
Out[15]:
3751.0

1.11 Calculate a "Crash Index"¶

Let's calculate a "crash index" that provides a normalized measure of the crash frequency per street. To do this, we'll need to:

  1. Calculate the total crash count divided by the street length, using the length column
  2. Perform a log transformation of the crash/length variable — use numpy's log10() function
  3. Normalize the index from 0 to 1 (see the lecture notes for an example of this transformation)

Note: since the crash index involves a log transformation, you should only calculate the index for streets where the crash count is greater than zero.

After this step, you should have a new column in the data frame from 1.9 that includes a column called part 1.9.

In [28]:
merged_crashes['crashovlength'] = merged_crashes['crash_count'] / merged_crashes['length']
merged_crashes['logcrashovlength'] = np.log10(merged_crashes['crashovlength'])
merged_crashes['crash_index'] = (merged_crashes['logcrashovlength'] - merged_crashes['logcrashovlength'].min()) / (merged_crashes['logcrashovlength'].max() - merged_crashes['logcrashovlength'].min())
C:\Users\cruse\mambaforge1\envs\musa-550-fall-2023\lib\site-packages\pandas\core\arraylike.py:402: RuntimeWarning: divide by zero encountered in log10
  result = getattr(ufunc, method)(*inputs, **kwargs)

1.12 Plot a histogram of the crash index values¶

Use matplotlib's hist() function to plot the crash index values from the previous step.

You should see that the index values are Gaussian-distributed, providing justification for why we log-transformed!

In [29]:
merged_crashes['crash_index'] = pd.to_numeric(merged_crashes['crash_index']).fillna(0)
In [30]:
import matplotlib.pyplot as plt

crash_index_values = merged_crashes['crash_index'].dropna()

plt.hist(crash_index_values, bins=30, color='blue', edgecolor='black')
plt.title('Histogram of Normalized Crash Index')
plt.xlabel('Normalized Crash Index')
plt.ylabel('Frequency')
plt.show()
No description has been provided for this image

1.13 Plot an interactive map of the street networks, colored by the crash index¶

You can use GeoPandas to make an interactive Folium map, coloring the streets by the crash index column.

Tip: if you use the viridis color map, try using a "dark" tile set for better constrast of the colors.

In [31]:
merged_crashes = merged_crashes.dropna(subset=['geometry'])
merged_crashes.explore(column='crash_index', cmap='viridis', legend=True)
Out[31]:
Make this Notebook Trusted to load map: File -> Trust Notebook

Part 2: Scraping Craigslist¶

In this part, we'll be extracting information on apartments from Craigslist search results. You'll be using Selenium and BeautifulSoup to extract the relevant information from the HTML text.

For reference on CSS selectors, please see the notes from Week 6.

Primer: the Craigslist website URL¶

We'll start with the Philadelphia region. First we need to figure out how to submit a query to Craigslist. As with many websites, one way you can do this is simply by constructing the proper URL and sending it to Craigslist.

https://philadelphia.craigslist.org/search/apa?min_price=1&min_bedrooms=1&minSqft=1#search=1~gallery~0~0

There are three components to this URL.

  1. The base URL: http://philadelphia.craigslist.org/search/apa

  2. The user's search parameters: ?min_price=1&min_bedrooms=1&minSqft=1

We will send nonzero defaults for some parameters (bedrooms, size, price) in order to exclude results that have empty values for these parameters.

  1. The URL hash: #search=1~gallery~0~0

As we will see later, this part will be important because it contains the search page result number.

The Craigslist website requires Javascript, so we'll need to use Selenium to load the page, and then use BeautifulSoup to extract the information we want.

2.1 Initialize a selenium driver and open Craigslist¶

As discussed in lecture, you can use Chrome, Firefox, or Edge as your selenium driver. In this part, you should do two things:

  1. Initialize the selenium driver
  2. Use the driver.get() function to open the following URL:

https://philadelphia.craigslist.org/search/apa?min_price=1&min_bedrooms=1&minSqft=1#search=1~gallery~0~0

This will give you the search results for 1-bedroom apartments in Philadelphia.

In [32]:
from selenium import webdriver
import requests
In [35]:
driver = webdriver.Chrome()
url = "https://philadelphia.craigslist.org/search/apa?min_price=1&min_bedrooms=1&minSqft=1#search=1gallery0~0"
driver.get(url)

2.2 Initialize your "soup"¶

Once selenium has the page open, we can get the page source from the driver and use BeautifulSoup to parse it. In this part, initialize a BeautifulSoup object with the driver's page source

In [36]:
soup = BeautifulSoup(driver.page_source, "html.parser")

2.3 Parsing the HTML¶

Now that we have our "soup" object, we can use BeautifulSoup to extract out the elements we need:

  • Use the Web Inspector to identify the HTML element that holds the information on each apartment listing.
  • Use BeautifulSoup to extract these elements from the HTML.

At the end of this part, you should have a list of 120 elements, where each element is the listing for a specific apartment on the search page.

In [37]:
listings = soup.select(".cl-search-result")


len(listings)
Out[37]:
120

2.4 Find the relevant pieces of information¶

We will now focus on the first element in the list of 120 apartments. Use the prettify() function to print out the HTML for this first element.

From this HTML, identify the HTML elements that hold:

  • The apartment price
  • The number of bedrooms
  • The square footage
  • The apartment title

For the first apartment, print out each of these pieces of information, using BeautifulSoup to select the proper elements.

In [38]:
listings_ = listings[0]
In [39]:
print(listings_.prettify())
<li class="cl-search-result cl-search-view-mode-gallery" data-pid="7699444485" title="1 BD, Refrigerator, Dishwasher">
 <div class="gallery-card">
  <div class="cl-gallery">
   <div class="gallery-inner">
    <a class="main" href="https://philadelphia.craigslist.org/apa/d/philadelphia-bd-refrigerator-dishwasher/7699444485.html">
     <div class="swipe" style="visibility: visible;">
      <div class="swipe-wrap" style="width: 6144px;">
       <div data-index="0" style="width: 384px; left: 0px; transition-duration: 0ms; transform: translateX(0px);">
        <span class="loading icom-">
        </span>
        <img alt="1 BD, Refrigerator, Dishwasher 1" src="https://images.craigslist.org/00f0f_dvAjysV1qwg_02P021_300x300.jpg"/>
       </div>
       <div data-index="1" style="width: 384px; left: -384px; transition-duration: 0ms; transform: translateX(384px);">
       </div>
       <div data-index="2" style="width: 384px; left: -768px; transition-duration: 0ms; transform: translateX(384px);">
       </div>
       <div data-index="3" style="width: 384px; left: -1152px; transition-duration: 0ms; transform: translateX(384px);">
       </div>
       <div data-index="4" style="width: 384px; left: -1536px; transition-duration: 0ms; transform: translateX(384px);">
       </div>
       <div data-index="5" style="width: 384px; left: -1920px; transition-duration: 0ms; transform: translateX(384px);">
       </div>
       <div data-index="6" style="width: 384px; left: -2304px; transition-duration: 0ms; transform: translateX(384px);">
       </div>
       <div data-index="7" style="width: 384px; left: -2688px; transition-duration: 0ms; transform: translateX(-384px);">
       </div>
      </div>
     </div>
     <div class="slider-back-arrow icom-">
     </div>
     <div class="slider-forward-arrow icom-">
     </div>
    </a>
   </div>
   <div class="dots">
    <span class="dot selected">
     •
    </span>
    <span class="dot">
     •
    </span>
    <span class="dot">
     •
    </span>
    <span class="dot">
     •
    </span>
    <span class="dot">
     •
    </span>
    <span class="dot">
     •
    </span>
    <span class="dot">
     •
    </span>
    <span class="dot">
     •
    </span>
   </div>
  </div>
  <a class="cl-app-anchor text-only posting-title" href="https://philadelphia.craigslist.org/apa/d/philadelphia-bd-refrigerator-dishwasher/7699444485.html" tabindex="0">
   <span class="label">
    1 BD, Refrigerator, Dishwasher
   </span>
  </a>
  <div class="meta">
   9 mins ago
   <span class="separator">
    ·
   </span>
   <span class="housing-meta">
    <span class="post-bedrooms">
     1br
    </span>
    <span class="post-sqft">
     643ft
     <span class="exponent">
      2
     </span>
    </span>
   </span>
   <span class="separator">
    ·
   </span>
   Philadelphia's Avenue of the Arts neighborhood
  </div>
  <span class="priceinfo">
   $2,218
  </span>
  <button class="bd-button cl-favorite-button icon-only" tabindex="0" title="add to favorites list" type="button">
   <span class="icon icom-">
   </span>
   <span class="label">
   </span>
  </button>
  <button class="bd-button cl-banish-button icon-only" tabindex="0" title="hide posting" type="button">
   <span class="icon icom-">
   </span>
   <span class="label">
    hide
   </span>
  </button>
 </div>
</li>

In [40]:
price = listings_.select_one(".priceinfo").text
price
Out[40]:
'$2,218'
In [41]:
numberofbeds=listings_.select_one(".post-bedrooms").text
numberofbeds
Out[41]:
'1br'
In [42]:
sqft = listings_.select_one(".post-sqft").text
sqft
Out[42]:
'643ft2'
In [43]:
apttitle =listings_.select_one(".posting-title span.label").text
apttitle
Out[43]:
'1 BD, Refrigerator, Dishwasher'

2.5 Functions to format the results¶

In this section, you'll create functions that take in the raw string elements for price, size, and number of bedrooms and returns them formatted as numbers.

I've started the functions to format the values. You should finish theses functions in this section.

Hints

  • You can use string formatting functions like string.replace() and string.strip()
  • The int() and float() functions can convert strings to numbers
In [44]:
def format_bedrooms(bedrooms_string):
    nmbrbeds =''.join(n for n in bedrooms_string if n.isdigit())
    nmbrbeds = float(nmbrbeds)
    return nmbrbeds
In [45]:
def format_size(size_string):
    if size_string:
        size = ''.join(char for char in size_string if char.isdigit() or char in ('.', 'e', 'E', '+', '-'))
        return float(size)
In [46]:
def format_price(price_string):
    if price_string:
        price = ''.join(char for char in price_string if char.isdigit() or char in ('.', 'e', 'E', '+', '-'))
        return float(price)

2.6 Putting it all together¶

In this part, you'll complete the code block below using results from previous parts. The code will loop over 5 pages of search results and scrape data for 600 apartments.

We can get a specific page by changing the search=PAGE part of the URL hash. For example, to get page 2 instead of page 1, we will navigate to:

https://philadelphia.craigslist.org/search/apa?min_price=1&min_bedrooms=1&minSqft=1#search=2~gallery~0~0

In the code below, the outer for loop will loop over 5 pages of search results. The inner for loop will loop over the 120 apartments listed on each search page.

Fill in the missing pieces of the inner loop using the code from the previous section. We will be able to extract out the relevant pieces of info for each apartment.

After filling in the missing pieces and executing the code cell, you should have a Data Frame called results that holds the data for 600 apartment listings.

Notes¶

Be careful if you try to scrape more listings. Craigslist will temporarily ban your IP address (for a very short time) if you scrape too much at once. I've added a sleep() function to the for loop to wait 30 seconds between scraping requests.

If the for loop gets stuck at the "Processing page X..." step for more than a minute or so, your IP address is probably banned temporarily, and you'll have to wait a few minutes before trying again.

In [47]:
from time import sleep
In [48]:
from time import sleep
import pandas as pd
from bs4 import BeautifulSoup

results = []

# search in batches of 120 for 5 pages
max_pages = 5

# The base URL we will be using
base_url = "https://philadelphia.craigslist.org/search/apa?min_price=1&min_bedrooms=1&minSqft=1"

# loop over each page of search results
for page_num in range(1, max_pages + 1):
    print(f"Processing page {page_num}...")

    # Update the URL hash for this page number and make the combined URL
    url_hash = f"#search={page_num}~gallery~0~0"
    url = base_url + url_hash

    # Go to the driver and wait for 5 seconds
    driver.get(url)
    sleep(5)

    # YOUR CODE: get the list of all apartments
    # This is the same code from Part 1.2 and 1.3
    # It should be a list of 120 apartments
    
    soup = BeautifulSoup(driver.page_source, "html.parser")
    apts = soup.select(".cl-search-result")
    print("Number of apartments =", len(apts))

    # loop over each apartment in the list
    page_results = []
    for apt in apts:

        # YOUR CODE: the bedrooms string
        numberofbeds = apt.select_one(".post-bedrooms").text

        # YOUR CODE: the size string
        size = apt.select_one(".post-sqft").text

        # YOUR CODE: the title string
        title = apt.select_one(".posting-title span.label").text

        # YOUR CODE: the price string
        price = apt.select_one(".priceinfo").text

        # Format using functions from Part 1.5
        bedrooms = format_bedrooms(numberofbeds)
        size = format_size(size)
        price = format_price(price)

        # Save the result
        page_results.append([price, size, bedrooms, title])

    # Create a dataframe and save
    col_names = ["price", "size", "bedrooms", "title"]
    df = pd.DataFrame(page_results, columns=col_names)
    results.append(df)

    print("Sleeping for 10 seconds between calls")
    sleep(10)

# Finally, concatenate all the results
results = pd.concat(results, axis=0).reset_index(drop=True)
Processing page 1...
Number of apartments = 120
Sleeping for 10 seconds between calls
Processing page 2...
Number of apartments = 120
Sleeping for 10 seconds between calls
Processing page 3...
Number of apartments = 120
Sleeping for 10 seconds between calls
Processing page 4...
Number of apartments = 120
Sleeping for 10 seconds between calls
Processing page 5...
Number of apartments = 120
Sleeping for 10 seconds between calls
In [ ]:
 

2.7 Plotting the distribution of prices¶

Use matplotlib's hist() function to make two histograms for:

  • Apartment prices
  • Apartment prices per square foot (price / size)

Make sure to add labels to the respective axes and a title describing the plot.

In [49]:
plt.hist(df['price'], bins=30, color='blue', edgecolor='black')
plt.title('Apartment prices')
plt.xlabel('Listing Price ($)')
plt.ylabel('Frequency')
plt.show()
No description has been provided for this image
In [50]:
plt.hist(df['price']/df['size'], bins=30, color='blue', edgecolor='black')
plt.title('Apartment prices per Square Foot')
plt.xlabel('Listing Price Per Square Foot($)')
plt.ylabel('Frequency')
plt.show()
No description has been provided for this image

Side note: rental prices per sq. ft. from Craigslist¶

The histogram of price per sq ft should be centered around ~1.5. Here is a plot of how Philadelphia's rents compare to the other most populous cities:

No description has been provided for this image

Source

2.8 Comparing prices for different sizes¶

Use altair to explore the relationship between price, size, and number of bedrooms. Make an interactive scatter plot of price (x-axis) vs. size (y-axis), with the points colored by the number of bedrooms.

Make sure the plot is interactive (zoom-able and pan-able) and add a tooltip with all of the columns in our scraped data frame.

With this sort of plot, you can quickly see the outlier apartments in terms of size and price.

In [51]:
scatter_plot = alt.Chart(results).mark_circle(size=60).encode(
    x=alt.X('price:Q', axis=alt.Axis(title='Price ($)')),
    y=alt.Y('size:Q', axis=alt.Axis(title='Size (sqft)')),
    color=alt.Color('bedrooms:N', scale=alt.Scale(scheme='category10')),
    tooltip=["title", "price:Q", "size:Q", "bedrooms:N"]
).properties(
    width=600,
    height=400
).interactive()

scatter_plot.display()
In [ ]: